Flume agent task failure
Error in the Flume logs:
Development initially suspected a Kerberos problem, but further investigation showed that each agent's JVM was exceeding its 8192m memory quota. Because the process was not configured with a parameter to kill it automatically on OutOfMemoryError, it kept running while repeatedly logging errors.
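One way to close that gap is to cap the heap at the quota and make the JVM exit on the first OutOfMemoryError. The snippet below is a sketch only, assuming Flume is launched through conf/flume-env.sh and a HotSpot JDK 8u92 or newer; the flag values (other than the 8192m quota from above) are illustrative, not the configuration that was actually deployed:

```shell
# conf/flume-env.sh -- sketch, assuming HotSpot JDK 8u92+.
# Cap the heap at the 8192m quota and exit on OOM instead of
# limping along and corrupting downstream HDFS writes.
export JAVA_OPTS="-Xmx8192m \
  -XX:+ExitOnOutOfMemoryError \
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/flume-oom.hprof"
```

On JDKs older than 8u92, -XX:OnOutOfMemoryError="kill -9 %p" is the usual substitute, and a supervisor (systemd, supervisord) can then restart the agent.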
The Flume agents went into OutOfMemoryError (unable to create new native thread), and the impact on HDFS was the error posted above.
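Note that "unable to create new native thread" is usually a thread-count or per-user ulimit problem rather than true heap exhaustion, so it is worth comparing the agent's thread count against the OS limit. A generic check (not specific to this cluster; $$ stands in for the agent's PID):

```shell
# Number of native threads (NLWP) in a process; here $$ (this shell)
# stands in for the Flume agent's PID.
ps -o nlwp= -p $$

# Per-user limit on processes/threads. When a JVM's thread count
# approaches this, thread creation fails with the OOM above even
# though the heap may be far from full.
ulimit -u
```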
At the same time, HDFS also recorded write failures:
hadoop-datanode1:50010:DataXceiver error processing WRITE_BLOCK operation
MapReduce jobs also failed during the reduce phase with: Unable to close file because the last block does not have enough number of replicas.
Use jmap -heap <pid> to check a Java process's memory usage.
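To confirm which resource is actually exhausted, the heap check can be paired with a thread count from a thread dump. A sketch using JDK 8 command names, with <pid> standing for the agent's process id:

```
# Heap occupancy vs. the 8192m cap (JDK 8; on JDK 9+ the
# equivalent is: jhsdb jmap --heap --pid <pid>)
jmap -heap <pid>

# Thread dump; a count in the thousands points at native-thread OOM,
# i.e. a source/sink leaking threads rather than real heap pressure.
jstack <pid> | grep -c 'java.lang.Thread.State'
```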